CoCALC: A Self-Supervised Visual Place Recognition Approach Combining Appearance and Geometric Information

Abstract

Visual place recognition (VPR) is considered among the most complicated tasks in SLAM due to the multiple challenges posed by drastic variations in both appearance and viewpoint. To address this issue, this article presents a self-supervised lightweight VPR approach (namely CoCALC) that fully utilizes the geometric information provided by images. The main thing that makes CoCALC ultra-lightweight (only 0.27 MB) is our use of Depthwise Separable Convolution (DSC), a simple but effective architecture that enables the model to generate a more robust image representation. The network, trained specifically for VPR, can efficiently extract deep convolutional features from salient regions that have relatively higher entropy, thereby expanding its applications to resource-limited platforms without GPUs. To further eliminate the negative consequences of a high percentage of false matches, a novel band-matrix-based check is employed to filter out incorrectly matched patches, and the impact of different bandwidths on the recall rate is discussed. Results on several benchmark datasets confirm that the proposed approach yields state-of-the-art performance and superior generalization with acceptable efficiency. All relevant code is available at https://github.com/LiKangyuLKY/CoCALC-VPR for further studies.
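Two of the ingredients named in the abstract lend themselves to a short sketch: the depthwise separable convolution (DSC) block that keeps the model small, and the band-matrix check that discards patch matches falling outside a band around the diagonal of the match matrix. The PyTorch sketch below illustrates both ideas only; it is not the authors' implementation, and the names `DSCBlock`, `band_matrix_filter`, and `bandwidth` are placeholders chosen here.

```python
# Minimal sketch of the two ideas named in the abstract; NOT the authors' code.
import torch
import torch.nn as nn

class DSCBlock(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 pointwise conv. This factorization is what keeps
    DSC-based models far smaller than a dense 3x3 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

def band_matrix_filter(matches, bandwidth: int):
    """Hypothetical geometric check: if patches of two images of the same
    place are enumerated in a fixed grid order, valid pairs (i, j) should
    lie inside a band |i - j| <= bandwidth around the diagonal of the
    match matrix; pairs outside the band are treated as false matches."""
    return [(i, j) for (i, j) in matches if abs(i - j) <= bandwidth]

matches = [(0, 1), (3, 3), (5, 14), (8, 7)]
print(band_matrix_filter(matches, bandwidth=2))  # [(0, 1), (3, 3), (8, 7)]
```

The last two lines hint at the trade-off the abstract mentions: a wider band tolerates larger viewpoint shifts but also lets more false matches through, which is why the choice of bandwidth affects the recall rate.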

Similar Articles

Self-Supervised Visual Place Recognition Learning in Mobile Robots

Place recognition is a critical component in robot navigation that enables it to re-establish previously visited locations, and simultaneously use this information to correct the drift incurred in its dead-reckoned estimate. In this work, we develop a self-supervised approach to place recognition in robots. The task of visual loop-closure identification is cast as a metric learning problem, whe...
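As a rough illustration of casting loop-closure identification as metric learning, the sketch below trains an embedding so that two views of the same place land closer together than views of different places, using a standard triplet margin loss. The tiny network and random tensors are placeholders, not the paper's model or data.

```python
# Generic metric-learning sketch for place recognition; a standard
# triplet objective, not the specific model from the paper.
import torch
import torch.nn as nn

embed = nn.Sequential(  # placeholder embedding network
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

loss_fn = nn.TripletMarginLoss(margin=0.5)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# anchor/positive: two views of the same place; negative: a different place.
anchor = torch.randn(8, 3, 64, 64)
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = loss_fn(embed(anchor), embed(positive), embed(negative))
opt.zero_grad()
loss.backward()
opt.step()
```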

Hierarchical Sparse Coding With Geometric Prior For Visual Place Recognition

We address the problem of estimating place information of an image using principles from automated representation learning. We pursue a hierarchical sparse coding approach that learns features useful in discriminating images across places, by initializing it with a geometric prior corresponding to transformations between image appearance space and their corresponding place grouping space using ...
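To make the sparse-coding step concrete, here is a textbook single-layer sketch that infers a sparse code for an image descriptor against a dictionary via ISTA (iterative soft-thresholding); the paper's hierarchy and geometric-prior initialization are not reproduced, and all names below are illustrative.

```python
# Textbook ISTA sparse-coding step; the hierarchy and geometric prior
# of the paper are not reproduced here.
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=100):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the quadratic term
        u = z - grad / L                   # gradient step
        z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft-threshold
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 128))         # dictionary: 128 atoms in R^32
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = rng.standard_normal(32)                # an image descriptor
z = sparse_code(x, D)
print(f"{np.count_nonzero(z)} of {z.size} coefficients active")
```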

Place Recognition using Near and Far Visual Information

In this paper we show how to carry out robust place recognition using both near and far information provided by a stereo camera. Visual appearance is known to be very useful in place recognition tasks. In recent years, it has been shown that taking geometric information also into account further improves system robustness. Stereo visual systems provide 3D information and texture of nearby regio...

HMM-based audio-visual speech recognition integrating geometric and appearance-based visual features

A good front end for visual feature extraction is an important element of audio-visual speech recognition systems. We propose a new visual feature representation that combines both geometric and pixel-based features. Using our previously developed contour-based lip-tracking algorithm, geometric features including the height and width of the lips are automatically extracted. Lip boundary tracking...
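A minimal sketch of the geometric half of that representation, assuming the lip contour arrives as (x, y) points from the tracking algorithm: height and width reduce to the extents of the contour, which can then be concatenated with pixel-based features. The function name and the placeholder appearance vector are assumptions made here.

```python
# Illustrative extraction of geometric lip features (height, width)
# from a tracked contour; the tracker itself is assumed.
import numpy as np

def lip_geometry(contour: np.ndarray) -> np.ndarray:
    """contour: (N, 2) array of (x, y) lip-boundary points."""
    width = contour[:, 0].max() - contour[:, 0].min()
    height = contour[:, 1].max() - contour[:, 1].min()
    return np.array([width, height])

contour = np.array([[10, 40], [30, 35], [50, 40], [30, 48]], dtype=float)
geom = lip_geometry(contour)                # geometric features
pixels = np.zeros(16)                       # placeholder appearance features
fused = np.concatenate([geom, pixels])      # combined visual feature vector
print(geom)  # [40. 13.]
```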

FAB-MAP: Appearance-Based Place Recognition and Mapping using a Learned Visual Vocabulary Model

We present an overview of FAB-MAP, an algorithm for place recognition and mapping developed for infrastructure-free mobile robot navigation in large environments. The system allows a robot to identify when it is revisiting a previously seen location, on the basis of imagery captured by the robot’s camera. We outline a complete probabilistic framework for the task, which is applicable even in vi...
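As a drastically simplified view of the probabilistic framing (FAB-MAP itself models visual-word co-occurrence with a Chow-Liu tree, which is omitted here), the sketch below scores a query bag of visual words against each mapped place with an independent Bernoulli word model and normalizes the result into a posterior over places. All names and probabilities are illustrative.

```python
# Naive-Bayes simplification of appearance-based place matching;
# FAB-MAP's Chow-Liu co-occurrence model is deliberately omitted.
import numpy as np

def place_posterior(query, places, eps=1e-3):
    """query: binary visual-word vector (V,); places: (P, V) binary rows,
    one per mapped location. Returns P(place | query) under a naive
    per-word Bernoulli model with probability (1 - eps) of observing a
    word that the place contains."""
    p_word = np.clip(places.astype(float), eps, 1.0 - eps)  # P(word | place)
    log_lik = (query * np.log(p_word) +
               (1 - query) * np.log(1.0 - p_word)).sum(axis=1)
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()

places = np.array([[1, 1, 0, 0],    # place 0's word occurrences
                   [0, 0, 1, 1],    # place 1
                   [1, 0, 1, 0]])   # place 2
query = np.array([1, 1, 0, 0])      # revisit of place 0?
print(place_posterior(query, places))  # highest mass on place 0
```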

Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3246803